A Forensically Sound Adversary Model for Mobile Devices

Authors

  • Quang Do
  • Ben Martini
  • Kim-Kwang Raymond Choo
Abstract

In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g., Android, iOS and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the ongoing and rapidly changing nature of mobile device technologies. An integral principle of, and a significant constraint upon, forensic practitioners is that of forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case, a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using the methodology with six popular cloud apps, we successfully extracted a range of data of forensic interest from both the external and internal storage of the mobile device.
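The abstract's forensic-soundness constraint is commonly operationalised by hashing acquired evidence before analysis, analysing it read-only, and re-hashing afterwards to demonstrate integrity. The sketch below illustrates that principle only; it is not the authors' methodology, and the app database schema (`cloud_app.db`, an `accounts` table) is a hypothetical stand-in for data acquired from a device.

```python
import hashlib
import os
import sqlite3
import tempfile

# Stand-in for an acquired cloud-app database (hypothetical schema,
# not taken from the paper).
db_path = os.path.join(tempfile.mkdtemp(), "cloud_app.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE accounts (user TEXT, token TEXT)")
con.execute("INSERT INTO accounts VALUES ('alice', 'abc123')")
con.commit()
con.close()

def sha256(path):
    """Hash the evidence file so its integrity can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Record the acquisition hash before any analysis takes place.
before = sha256(db_path)

# 2. Analyse the evidence through a read-only connection so the
#    examination cannot modify it.
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
rows = ro.execute("SELECT user, token FROM accounts").fetchall()
ro.close()

# 3. Re-hash and confirm the evidence is byte-identical (soundness check).
after = sha256(db_path)
assert before == after
print(rows)
```

The before/after hash comparison is what allows an examiner to assert in court that analysis did not alter the source evidence.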

Similar Articles

Data recovery from damaged electronic memory devices

High-value crimes involving mobile phones sometimes require examination of highly damaged, possibly exploded devices, currently beyond the capabilities of existing methods. In the pursuit of a forensically sound method to examine damaged memory devices, we are investigating a number of techniques to access and analyse samples. Backside processing involving lapping steps, followed by a BOE / TMAH ...

A Review on Mobile Device's Digital Forensic Process Models

The main purpose of this study is to discuss the different comparative studies on digital forensics process models, especially in the field of mobile devices. In order to legally pursue digital criminals, an investigation should be conducted in a forensically sound manner so that the acquired evidence will be accepted in a court of law. Digital forensic process models define the important steps t...

Classifier-to-generator Attack: Estimation

Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons. In this setting, can an adversary obtain the private samples if the classification model is given to the adversary? We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack. This situation arises when the classification...

Journal:

Volume 10, Issue -

Pages -

Publication date: 2015